I’ve written the http2 explained document and given several talks about HTTP/2. As a result, I’ve received a lot of questions about TLS in association with HTTP/2, and I want to address some of them here.
TLS is not mandatory
In the HTTP/2 specification that has been approved and that is about to become an official RFC any day now, there is no language that mandates the use of TLS for securing the protocol. On the contrary, the spec clearly explains how to use it both in clear text (over plain TCP) and over TLS. TLS is not mandatory for HTTP/2.
TLS mandatory in effect
While the spec doesn’t force anyone to implement HTTP/2 over TLS and allows you to do it over clear text TCP, representatives from both the Firefox and the Chrome development teams have expressed their intent to implement HTTP/2 over TLS only. This means HTTPS:// URLs are the only ones that will enable HTTP/2 in these browsers. The Internet Explorer team has said it intends to also support the new protocol without TLS, but when Microsoft shipped its first test version as part of the Windows 10 tech preview, that browser also only supported HTTP/2 over TLS. As of this writing, no browser released to the public speaks clear text HTTP/2, and most existing servers only speak HTTP/2 over TLS.
The difference between what the spec allows and what browsers will provide is the key here, and browsers and all other user-agents are allowed, and expected, to select their own path forward.
If you’re implementing and deploying a server for HTTP/2, you pretty much have to do it for HTTPS to get users. And your clear text implementation will not be as tested…
A valid remark would be that browsers are not the only HTTP/2 user-agents, and there are several non-browser implementations that support the non-TLS version of the protocol, but I still believe that the browsers’ impact on this will be notable.
Stricter TLS
When opting to speak HTTP/2 over TLS, the spec mandates stricter TLS requirements than most clients have ever enforced for normal HTTP/1.1 over TLS.
It says TLS 1.2 or later is a MUST. It forbids compression and renegotiation. It specifies fairly detailed “worst acceptable” key sizes and cipher suites. HTTP/2 will simply use safer TLS.
Another detail here is that HTTP/2 over TLS requires the use of ALPN, a relatively new TLS extension (RFC 7301) that lets the client and server negotiate the new HTTP version during the TLS handshake, without losing valuable time or extra network round-trips.
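To make these requirements concrete, here is a minimal sketch of a server setup in their spirit, written in Go against the standard crypto/tls and net/http packages: TLS 1.2 as the floor, an AEAD cipher suite, and “h2” offered via ALPN. The certificate file names are placeholders, and this is an illustration, not a complete hardening guide.

    package main

    import (
        "crypto/tls"
        "log"
        "net/http"
    )

    func main() {
        tlsConf := &tls.Config{
            // HTTP/2 requires TLS 1.2 or later.
            MinVersion: tls.VersionTLS12,
            // One AEAD suite of the kind the spec's cipher suite rules favor.
            CipherSuites: []uint16{
                tls.TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,
            },
            // ALPN: offer HTTP/2 first, fall back to HTTP/1.1.
            NextProtos: []string{"h2", "http/1.1"},
        }
        srv := &http.Server{
            Addr:      ":443",
            TLSConfig: tlsConf,
            Handler: http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
                w.Write([]byte("hello over " + r.Proto + "\n"))
            }),
        }
        // Go's TLS stack never compresses and does not renegotiate by default,
        // which lines up with two of the spec's prohibitions.
        log.Fatal(srv.ListenAndServeTLS("cert.pem", "key.pem")) // placeholder files
    }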
TLS-only encourages more HTTPS
Since browsers only speak HTTP/2 over TLS (so far at least), sites that want HTTP/2 enabled must do it over HTTPS to get users. This puts gentle pressure on sites to offer proper HTTPS and pushes more people over to end-to-end TLS encrypted connections.
I, and others who are concerned about users and their right to privacy and to avoid mass surveillance, generally consider this (more HTTPS) a good thing.
Why not mandatory TLS?
TLS didn’t become mandatory in the spec quite simply because there was never consensus that it was a good idea for the protocol. A large enough portion of the working group’s participants spoke up against the notion of mandatory TLS for HTTP/2. TLS was not mandatory before, so the starting point was without mandatory TLS, and we never managed to move to another standpoint.
When I mention this in discussions with people the immediate follow-up question is…
No really, why not mandatory TLS?
The motivations for opposing mandatory TLS in HTTP/2 are plentiful. Let me address the ones I hear most often, roughly in order of the weight they carried with those who argued them.
1. A desire to inspect HTTP traffic
There is a claimed “need” to inspect or intercept HTTP traffic for various reasons: prisons, schools, anti-virus, IPR protection, local legal requirements and so on are all brought up. An absolute requirement to cache things in a proxy is often bundled with this, the claim being that you can never build a decent network on an airplane, over a satellite link and so on without caching, and that such caching has to be done with intercepts.
Of course, MITMing proxies that terminate SSL traffic are not even rare these days and HTTP/2 can’t do much about limiting the use of such mechanisms.
2. Think of the little ones
“Small devices cannot handle the extra TLS burden.” Either because of the extra CPU load that comes with TLS, or because of managing certificates on a billion printers/fridges/routers etc. Certificates also expire regularly and need to be updated in the field.
Of course there is a minimum level of system performance required to do TLS decently, and there will always be systems that fall below that threshold.
3. Certificates are too expensive
The price of server certificates is historically often brought up as an argument against TLS, even though it isn’t really HTTP/2 specific, and I don’t think it was ever a particularly strong argument against TLS within HTTP/2. Several CAs now offer zero-cost or near-zero-cost certificates, and with upcoming efforts like Let’s Encrypt, chances are it’ll get even better in the not so distant future.
Recently someone even claimed that HTTPS limits the freedom of users since you need to give personal information away (he said) in order to get a certificate for your server. That was apparently not a price he was willing to pay. This is simply not true for the simplest kinds of certificates: for Domain Validated (DV) certificates you usually only have to prove that you “control” the domain in question in some way, typically by being able to receive email at a specific address within the domain.
4. The CA system is broken
TLS of today requires a PKI system with trusted certificate authorities that sign certificates, and this leads to a situation where all modern browsers trust several hundred CAs to do this right. I don’t think many people are happy with this or believe it is the ultimate security solution. There’s a portion of the Internet that advocates DANE (DNSSEC) to address parts of the problem, while others work on gradual band-aids like Certificate Transparency and OCSP stapling to make it suck less.
My personal belief is that rejecting TLS on the grounds that it isn’t good enough or not perfect is a weak argument. TLS and HTTPS are the best way we currently have to secure web sites. I wouldn’t mind seeing it improved in all sorts of ways but I don’t believe running protocols clear text until we have designed and deployed the next generation secure protocol is a good idea – and I think it will take a long time (if ever) until we see a TLS replacement.
Who were against mandatory TLS?
Yeah, lots of people ask me this, but I will refrain from naming specific people or companies here since I have no plans on getting into debates with them about details and subtleties in the way I portray their arguments. You can find them yourself if you want to, and you can most certainly make educated guesses without even doing so.
What about opportunistic security?
A text about TLS in HTTP/2 can’t be complete without mentioning this part. A lot of work in the IETF these days is going on around introducing opportunistic security and making sure it gets used by protocols. It was included in the HTTP/2 draft for a while but was moved out of the core spec in the name of simplification, and because it can be done anyway without being part of the spec. Also, far from everyone believes opportunistic security is a good idea: the opponents tend to say that it will hinder the adoption of “real” HTTPS for sites. I don’t believe that, but I respect the opinion; it is a guess about how users will act, just as much as my guess that they won’t act like that is!
Opportunistic security for HTTP is now being pursued outside of the HTTP/2 spec and allows clients to upgrade plain TCP connections to instead do “unauthenticated TLS” connections. And yes, it should always be emphasized: with opportunistic security, there should never be a “padlock” symbol or anything that would suggest that the connection is “secure”.
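As an illustration of how a server can participate, here is a minimal sketch in Go: a clear text HTTP/1.1 server advertises, through the Alt-Svc response header, that the same origin is also reachable as HTTP/2 on port 443, which an opportunistic client may then try over unauthenticated TLS. The port and lifetime are arbitrary example values.

    package main

    import (
        "log"
        "net/http"
    )

    func main() {
        handler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            // Advertise an alternative service: HTTP/2 ("h2") on port 443
            // of this same host, cacheable by the client for an hour.
            w.Header().Set("Alt-Svc", `h2=":443"; ma=3600`)
            w.Write([]byte("served in clear text; an alternative is advertised\n"))
        })
        log.Fatal(http.ListenAndServe(":80", handler))
    }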
Firefox supports opportunistic security for HTTP and it will be enabled by default from Firefox 37.
Translations
The post is available at softdroid.net: TLS in HTTP/2. (Russian)
TLS in HTTP/2 (Kazakh)
You can find the companies against encryption on the Internet at
http://www.atis.org/openweballiance/about.asp
This consortium was formed very late in the HTTP2 process to remove consumer protection and exploit personal information for financial gain. Have a read of their documentation and form your own opinion.
greg Says:
> “It forbids compression […]”
Is this compression in the TLS protocol, or HTTP compression as well? I guess this is to prevent CRIME-like attacks, but I can’t figure out why HTTP compression wouldn’t also be a problem. On the other hand, disabling HTTP compression seems bad.
I think you severely underestimate the monetary cost of enabling TLS. You also need a web host for the domain, and the cheap ($2/mo) ones usually don’t allow TLS. Saying that cost is not an issue sounds a bit like “let them eat cake”. Is the preferred outcome for the TLS proponents really that we all switch to Cloudflare to hide the insecure traffic from browsers? I really don’t like that the new web standards (e.g. HTTP/2, Service Workers) are being taken hostage in this way.
Here is another article on the companies against encryption of internet traffic:
http://www.infoworld.com/article/2855738/internet-privacy/consortium-opposes-your-privacy.html
Google et al. get a lot of blame for privacy intrusion (sometimes rightly), but in this case they’ve definitely tried to defend privacy.
Also interesting that Microsoft is listed as an OWA member.
Anders Says:
> > “It forbids compression […]”
> Is this compression in the TLS protocol, or HTTP compression as well? I guess this
This is about TLS compression, and as you say it is to prevent CRIME-like attacks. HTTP/2 comes with its own header compression, and HTTP payload compression is still allowed in HTTP/2. There was a proposal to make support for it mandatory; however, this failed during a discussion of content-encoding vs. transfer-encoding.
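To illustrate the header compression part, here is a minimal sketch using the golang.org/x/net/http2/hpack package (one HPACK implementation among several; any other would do): it encodes two header fields and decodes them back.

    package main

    import (
        "bytes"
        "fmt"
        "log"

        "golang.org/x/net/http2/hpack"
    )

    func main() {
        // Encode two header fields with HPACK.
        var buf bytes.Buffer
        enc := hpack.NewEncoder(&buf)
        enc.WriteField(hpack.HeaderField{Name: ":method", Value: "GET"})
        enc.WriteField(hpack.HeaderField{Name: "user-agent", Value: "sketch/1.0"})
        fmt.Printf("%d header bytes on the wire\n", buf.Len())

        // Decode them again; the callback fires once per decoded field.
        dec := hpack.NewDecoder(4096, func(f hpack.HeaderField) {
            fmt.Printf("decoded %s: %s\n", f.Name, f.Value)
        })
        if _, err := dec.Write(buf.Bytes()); err != nil {
            log.Fatal(err)
        }
    }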
>My personal belief is that rejecting TLS on the grounds that it isn’t good enough or not perfect is a weak argument.
It’s not just a weak argument, it’s a fallacious one. It’s called the nirvana fallacy: rejecting solutions to problems because they are not perfect.
Is there any way to force Chrome & Firefox to implement HTTP/2 without TLS?
The problem here is that implementing HTTP/2 only with TLS means there is no alternative in case a serious problem with TLS ever appears.
A technology without an alternative is a direct path to killing itself.
@Alexander: the way is to implement it yourself (== doable) and convince the world it wants that functionality (== hard). Both have the source code available.
If there is ever a serious problem with TLS, then we fix it. Switching from a secure protocol to an insecure one because there are problems in the secure one? It doesn’t make any sense as a backup plan.
Was there ever an argument given for the needs of non-browser implementations? I’m considering HTTP/2 for back-end communication where 100-Continue is essential, but is hampered in 1.1 by certain people who refuse to implement it in libraries because “it’s not the standard”. If I’m reading HTTP/2 right, its frames and flow control mechanisms would put this obstructionism to rest. But TLS is rather useless, and even harmful, in the specific system I have in mind.
Pete: I can’t recall any specific ones, no. There are several non-browser implementations already supporting HTTP/2 without TLS…
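For anyone who wants to experiment with clear text HTTP/2, here is a minimal sketch of an “h2c” server built on the golang.org/x/net/http2 packages, one such non-browser implementation:

    package main

    import (
        "log"
        "net/http"

        "golang.org/x/net/http2"
        "golang.org/x/net/http2/h2c"
    )

    func main() {
        h := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            w.Write([]byte("hello over " + r.Proto + ", no TLS involved\n"))
        })
        // h2c.NewHandler accepts clear-text HTTP/2 on a plain TCP listener,
        // both via the Upgrade dance and via prior knowledge.
        log.Fatal(http.ListenAndServe(":8080", h2c.NewHandler(h, &http2.Server{})))
    }

Recent curl versions can then talk HTTP/2 to it without any TLS involved, using --http2-prior-knowledge.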
Daniel, what if you are not able to fix it? What if the fix requires updating all known browsers and forcing all users onto new versions? What if, until the fix actually takes effect, all sites are compromised?
The problem with HTTPS is that it can decrease security more than it increases it. Even known public vulnerabilities allow not only decrypting the traffic but also hacking the system, because of issues in the protocol, in libraries or in implementations.
The HSTS super cookie allows spying on any visitor and connecting anonymous sessions to the real user. (Example: http://www.radicalresearch.co.uk/lab/hstssupercookies )
And those are only the public things; there are a lot of private vulnerabilities. TLS encrypts the traffic to fight spying on visitors by governments and third parties, but it opens other, more dangerous, vulnerabilities that allow spying on you even more and hacking the website’s system itself (and then stealing all information about every visitor, or doing something far worse).
There must be an alternative: each webmaster should be able to easily and safely disable HTTPS, temporarily or forever, while staying top notch on HTTP/2 performance.
There must always be an alternative. Chrome, Firefox and all the others must not force TLS. The IETF must force the corporations not to force TLS for HTTP/2.
I don’t appreciate the caricature of people who have concerns about the mandatory TLS requirement.
TLS quite obviously serves many important purposes, as evidenced by how it is used with HTTP/1.1. A world without TLS (or an equivalent) would be a very bad world indeed! The only issue is whether it should be *mandatory* with HTTP/2 (as opposed to, say, strongly recommended). Mandatory here could refer to the spec or “in effect”, but obviously we’re talking about the “in effect” case. (In any event, little separates a de facto standard from a written standard, if they’re followed by everyone.)
For whoever is in the (extremely powerful, yet unacknowledged) position of making implementation choices for our browsers, there are two main reasons to make TLS mandatory: simplicity (of implementation/adoption) and a sense of ethical responsibility. The first isn’t much of a reason: the implementation/maintenance cost of non-TLS is trivial, and adopters won’t find the TLS/non-TLS split any more complicated than with HTTP/1.1. So, the primary motive for mandatory TLS is in fact an *ethical* one.
But if the primary reason is ethical, has an ethical study of any kind been done? In particular, have the implications of various possible outcomes been considered? I’d imagine much of this has been done informally (and to a large extent, unwittingly), via mailing lists, blogs like this one, and so forth. But that process is subject to many biases (particularly confirmation bias) and much fallacious reasoning. For example, h3xx raises the nirvana fallacy — perhaps correctly — but arguably *mandatory* TLS is more culpable of committing a nirvana fallacy, due to its denial of real problems that TLS can’t solve (i.e. anything that can be done, can be done over TLS). Whether or not you believe that depends on how you want to interpret the argument mish-mash that’s been taking place.
For instance, clearly to you “almost free” = “free”. Whereas to me, the concept of transaction costs ( http://en.wikipedia.org/wiki/Transaction_cost ) means that whether or not “almost free” = “free” depends on the application — in particular, doing anything at scale could make “almost free” in fact not very free at all. I’m sure you’ve discovered this yourself while doing performance optimisation on code.
Now, my concerns about the (effectively) mandatory TLS requirement aren’t absolute. In particular, I’m looking for solutions to the important problems that appear to be ruled out by mandatory TLS (that would otherwise need to fall back to plain HTTP/1.1.) Most of these problems revolve around complexity, decentralisation and scale (due to transaction costs). An interesting real-world example came up on Mozilla Hacks not too long ago: https://hacks.mozilla.org/2015/02/embedding-an-http-web-server-in-firefox-os/#comment-17067 . That particular problem is something I still have no solution for in an all-TLS world.
Nonetheless, solutions *are* beginning to appear for these problems, such as with Let’s Encrypt, which solves some of the complexity and cost issues. In other cases, I’m just not sure what the solutions are, other than HTTP/1.1 fallback. Which in itself is actually quite fine, except that implementers are now proposing that many new browser features (beyond even speed-ups and security) will be restricted to HTTP/2/TLS — again, raising my concerns that experimentation in the future will be crippled. Albeit, the problem will be restricted to only a small set of cases. But those few cases frequently involve decentralisation, and so are potentially extremely critical for a world wide web that’s slowly over-centralising.
AFAIK, alt-svc only includes support for “h2”. Is support for “http” planned? It would allow much faster adoption of opportunistic encryption (server-side support for HTTP/2 is not widely available on stable distros 🙂 ).
Making TLS not mandatory is one thing, but it looks like the browser vendors will take care of it by just not implementing h2c, the clear-text variant.
However, as someone who works with proxies in wireless environments, I see a missed opportunity here. Even if all browsers are capable of HTTP/2, a lot of servers out there will only serve HTTP/1.1 for a while. Possibly for a very long time; who knows.
A transparent proxy doing HTTP/1.1 on the Internet side and HTTP/2 on the radio side gives mobile subscribers some latency benefits. The same gains are currently chased with very kludgy workarounds: different TCP congestion windows and algorithms, buffer sizes, URL rewriting to private URLs to emulate a CDN (prefetching and caching objects), and a lot more.
Think in terms of the bandwidth-delay product and the like: if a connection isn’t up for some time, you can have as much bandwidth in LTE-A as you want; when the latency is crap and not much traffic traverses an individual connection, you don’t make any use of it.
That’s one of the reasons there are so many so-called “Performance Enhancing Proxy” products out there. (Now someone will shout at me…)
A simple protocol upgrade in a proxy would mitigate that. And if it’s already HTTP/2 anyway, no one needs to touch the traffic.
I’m not saying that anyone should break up encrypted connections (that’s a no-brainer, though some operators are infamous for that); I’m simply talking about consolidating multiple concurrent TCP connections into a single one.
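A minimal sketch of such a gateway, assuming Go’s net/http and httputil (the origin URL and certificate files are placeholders, and a real deployment would add caching, timeouts and so on): HTTP/2 over TLS toward subscribers, plain HTTP/1.1 toward the origin. Note that this models an explicit gateway with its own hostname and certificate, not a transparent intercept of someone else’s TLS.

    package main

    import (
        "log"
        "net/http"
        "net/http/httputil"
        "net/url"
    )

    func main() {
        // Hypothetical HTTP/1.1-only origin somewhere on the Internet.
        origin, err := url.Parse("http://origin.example.com")
        if err != nil {
            log.Fatal(err)
        }
        proxy := httputil.NewSingleHostReverseProxy(origin)
        // Upstream side: the default transport speaks HTTP/1.1 to the origin.
        proxy.Transport = &http.Transport{}

        // Subscriber side: ListenAndServeTLS enables HTTP/2 by default in Go,
        // so clients get a single multiplexed connection over the radio link.
        log.Fatal(http.ListenAndServeTLS(":443", "cert.pem", "key.pem", proxy))
    }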
Just my 2 cents…